A Parameterized Theory of PAC Learning

Authors

Abstract

Probably Approximately Correct (i.e., PAC) learning is a core concept of sample complexity theory, and efficient PAC learnability is often seen as a natural counterpart to the class P in classical computational complexity. But while the nascent theory of parameterized complexity has allowed us to push beyond the P-NP "dichotomy" and identify the exact boundaries of tractability for numerous problems, there is no analogue in the domain of sample complexity that could push beyond efficient PAC learnability. As our core contribution, we fill this gap by developing a theory of parameterized PAC learning, which allows us to shed new light on several recent PAC learning results that incorporated elements of parameterized complexity. Within the theory, we identify not one but two notions of fixed-parameter learnability that both form distinct counterparts to the class FPT, the central concept of the parameterized paradigm, and we develop the machinery required to exclude fixed-parameter learnability. We then showcase the applications of this theory to identify refined boundaries of tractability for CNF and DNF learning as well as for a range of learning problems on graphs.
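To fix notation, here is the standard definition the abstract builds on, together with the shape a fixed-parameter analogue would take; the exact form of the parameterized bound below is an assumption following the usual FPT convention, not a quotation from the paper.

    A class C is efficiently PAC-learnable if there is an algorithm that, for every target c ∈ C, every distribution D, and all ε, δ ∈ (0, 1), uses poly(n, 1/ε, 1/δ) samples and time and outputs, with probability at least 1 − δ, a hypothesis h with
        Pr_{x∼D}[h(x) ≠ c(x)] ≤ ε.
    A fixed-parameter analogue in the spirit of FPT would instead allow time or sample bounds of the form f(k) · poly(n, 1/ε, 1/δ), for a parameter k and a computable function f.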


Similar resources

Computational Learning Theory: PAC Model

Data-driven learning has shown great promise in a number of practical applications, ranging from financial forecasting to medical diagnosis, computer vision, search engines, and recommendation systems. Such methods are particularly effective where concepts are fuzzy and difficult to model precisely and rigorously. For instance, how does Netflix recommend movies for you to watch? Prescribing ou...


PAC-Bayesian Statistical Learning Theory

This PhD thesis is a mathematical study of the learning task, specifically classification and least squares regression, in order to better understand why an algorithm works and to propose more efficient procedures. The thesis consists of four papers. The first one provides a PAC bound for the L2 generalization error of methods based on combining regression procedures. This bound is tight to the...
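For orientation, a standard fact about aggregating regression procedures, not a statement taken from the thesis itself: when combining M procedures f1, ..., fM from m samples, the benchmark model-selection-aggregation rate for the L2 risk R has the form

    E R(f̂) ≤ min_{1 ≤ i ≤ M} R(f_i) + C · (log M) / m,

where the aggregate estimator f̂ and the constant C are placeholders standing in for the specific procedure and bound analyzed in the thesis.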


PAC-Bayesian Theory for Transductive Learning

We propose a PAC-Bayesian analysis of the transductive learning setting by proposing a family of new bounds on the generalization error. Inductive learning, training set: we draw m examples i.i.d. from a distribution D on X × {−1, +1}: S = {(x1, y1), (x2, y2), ..., (xm, ym)} ∼ D^m. Task of an inductive learner: using S, learn a classifier h : X → {−1, +1} that has a low generalization risk on new e...
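Spelled out with the textbook definitions (standard formulas, not quoted from the paper): the generalization risk of a classifier h under D is

    R_D(h) = Pr_{(x,y)∼D}[h(x) ≠ y],

and the empirical risk on S is R_S(h) = (1/m) Σ_{i=1}^m 1[h(xi) ≠ yi]. In the transductive setting, by contrast, the learner observes the labels of a random subset of a fixed finite sample and is evaluated only on the remaining, unlabeled examples of that same sample.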


A. PAC Learning

• Let X = R² with orthonormal basis (e1, e2) and consider the set of concepts defined by the area inside a right triangle ABC with two sides parallel to the axes, with AB⃗/‖AB‖ = e1 and AC⃗/‖AC‖ = e2, and ‖AB‖/‖AC‖ = α for some positive real α ∈ R⁺. Show, using methods similar to those used in the lecture slides for axis-aligned rectangles, that this class can be (ε, δ)-PAC-learned from training d...
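For concreteness, here is one way the intended argument plausibly runs, assuming (as in the rectangle case) the learner that returns the tightest consistent hypothesis; the three-strip construction and the resulting constant are a reconstruction, not the official solution:

    Learner: output the smallest triangle of the given orientation and ratio α containing all positive examples; it is contained in the target triangle ABC.
    Cover the error region by three strips inside ABC, one along each side, each chosen to have probability mass ε/3 under D.
    If every strip contains at least one positive training point, the learned triangle misses at most the three strips, so its error is at most ε.
    By a union bound, Pr[error > ε] ≤ 3(1 − ε/3)^m ≤ 3e^{−mε/3} ≤ δ whenever m ≥ (3/ε) ln(3/δ).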



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i6.25837